163 research outputs found

    Partial-Candidate Commit for Chinese Pinyin Text Entry

    This publication describes systems and techniques directed to committing a partial candidate for a coding-style language. Key codes representing a coding-style language are input through a user interface to a computing device. In one aspect, the key codes may be pinyin text for translation to an output language of Chinese characters. The computing device generates output-language candidates that are representative of the key codes. An output-language candidate is identified that represents an intended communication relative to the key codes. A portion of the identified output-language candidate is selected to commit to the intended communication. This partial selection of the output-language candidate is made through the user interface (e.g., with a swipe gesture or a tap gesture). The user interface then commits only that partial selection of the output-language candidate to the intended communication.
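
    A minimal sketch of the partial-commit flow, assuming a hypothetical candidate structure in which pinyin syllables map one-to-one to output characters; the names and data here are illustrative, not the published implementation:

```python
from dataclasses import dataclass

# Sketch of partial-candidate commit for pinyin input (illustrative only).
# A candidate pairs an output-language string with the pinyin syllables
# it consumes.

@dataclass
class Candidate:
    hanzi: str              # output-language characters
    syllables: list         # pinyin syllables this candidate covers

def partial_commit(candidate: Candidate, n_chars: int, pending: list):
    """Commit the first n_chars characters of a candidate (e.g., selected
    by a swipe or tap gesture); return (committed_text, remaining_pinyin)."""
    committed = candidate.hanzi[:n_chars]
    # Syllables map one-to-one to characters in this sketch; uncommitted
    # syllables stay in the composition buffer for re-conversion.
    remaining = candidate.syllables[n_chars:] + pending
    return committed, remaining

# Example: the user typed "zhongguo", the top candidate is 中国, but they
# only intend 中; committing one character leaves "guo" pending.
cand = Candidate(hanzi="中国", syllables=["zhong", "guo"])
text, rest = partial_commit(cand, 1, [])
print(text, rest)  # 中 ['guo']
```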

    TWO-HANDED TYPING METHOD ON AN ARBITRARY SURFACE

    A computing device may detect user input, such as finger movements resembling typing on an invisible virtual keyboard, in the air or on any surface, to enable text entry. The computing device may use sensors (e.g., accelerometers, cameras, piezoelectric sensors, etc.) to detect the user’s finger movements, such as the user’s fingers moving through the air and/or contacting a surface. The computing device may then decode (or, in other words, convert, interpret, analyze, etc.) the detected finger movements to identify corresponding inputs representative of characters (e.g., alphanumeric characters, national characters, special characters, etc.). To reduce input errors, the computing device may decode the detected finger movements, at least in part, based on contextual information, such as preceding characters, words, and/or the like entered via previously detected user inputs. Similarly, the computing device may apply machine learning techniques and adjust parameters, such as a signal-to-noise ratio, to improve input accuracy. In some examples, the computing device may implement specific recognition, prediction, and correction algorithms to improve input accuracy. In this way, the computing device may accommodate biases in finger movements that are specific to the user entering the input.
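
    A minimal sketch of one way the contextual decoding described above could work, fusing a motion classifier's per-character scores with a character-level (bigram) language model; the classifier scores and bigram table are toy placeholders:

```python
import math

# Sketch: combine a motion classifier's per-character likelihoods with a
# bigram language model, as one way to use preceding-character context
# when decoding ambiguous finger movements. All probabilities are toy values.

def decode_char(motion_scores, context, bigrams):
    """motion_scores: {char: P(motion | char)} from the gesture classifier.
    bigrams: {(prev_char, char): P(char | prev_char)}.
    Returns the character maximizing log P(motion|c) + log P(c|prev)."""
    prev = context[-1] if context else " "
    best, best_lp = None, -math.inf
    for c, p_motion in motion_scores.items():
        p_lm = bigrams.get((prev, c), 1e-6)  # back off for unseen bigrams
        lp = math.log(p_motion) + math.log(p_lm)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Ambiguous motion: the classifier slightly prefers 'q', but after "the "
# the language model makes 'a' far more likely than 'q'.
scores = {"q": 0.55, "a": 0.45}
bigrams = {(" ", "a"): 0.05, (" ", "q"): 0.001}
print(decode_char(scores, "the ", bigrams))  # a
```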

    Visual Depiction of Voice Message Content

    Voice messages, while faster to compose than typed text, are inconvenient if the recipient is in a situation where they cannot listen to the message. This disclosure describes techniques that, with permission, automatically analyze incoming voice messages and provide visual information within a messaging application to enable the user to understand the contents of incoming voice messages at a glance. The glanceable visual information is derived by generating concise text of the message content, determining the message sentiment, analyzing the prosody and emotion of the voice, and adding emojis that correspond to the content, prosody, and emotion. The visual information is attached to the voice message in the user interface, which enables recipients to know about message contents immediately, without having to listen to the voice message. The techniques can thus improve the convenience and user experience of interacting with incoming voice messages within any application or platform.
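
    A minimal sketch of the analysis pipeline described above, with placeholder stubs standing in for the real speech-to-text, summarization, and sentiment/prosody models; names and return values are illustrative:

```python
# Sketch of the glanceable-summary pipeline. The three analysis steps are
# hypothetical stand-ins for real models, not a real API.

SENTIMENT_EMOJI = {"positive": "😊", "negative": "😟", "neutral": ""}

def transcribe(audio: bytes) -> str:
    # Placeholder: a real system would run a speech-to-text model here.
    return "Hey, running ten minutes late, see you at the cafe!"

def summarize(text: str, max_words: int = 6) -> str:
    # Placeholder: truncation stands in for an abstractive summarizer.
    return " ".join(text.split()[:max_words])

def classify_sentiment(audio: bytes, text: str) -> str:
    # Placeholder: a real system would fuse text sentiment with prosody
    # and emotion features extracted from the audio itself.
    return "positive"

def glanceable_info(audio: bytes) -> str:
    """Build the visual annotation attached to a voice message."""
    text = transcribe(audio)
    label = classify_sentiment(audio, text)
    return f"{SENTIMENT_EMOJI.get(label, '')} {summarize(text)}".strip()

print(glanceable_info(b"...raw audio..."))  # 😊 Hey, running ten minutes late,
```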

    DETECTING GESTURES UTILIZING MOTION SENSOR DATA AND MACHINE LEARNING

    A computing device is described that uses motion data from motion sensors to detect gestures or user inputs, such as out-of-screen user inputs for mobile devices. In other words, the computing device detects gestures or user touch inputs at locations of the device that do not include a touch screen, such as anywhere on the surface of the housing or the case of the device. The techniques described enable a computing device to utilize a standard, existing motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, etc.) to detect the user input and determine attributes of the user input. Motion data generated by the motion sensor (also referred to as a movement sensor) is processed by an artificial neural network to infer attributes of the user input. In other words, the computing device applies a machine-learned model to the motion data (also referred to as sensor data or motion sensor data) to classify or label the various attributes, characteristics, or qualities of the input. In this way, the computing device utilizes machine learning and motion data to classify attributes of the user input or gesture without the need for additional hardware, such as touch-sensitive devices and sensors.
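
    A minimal sketch of the core inference step, assuming a window of 6-channel IMU samples fed to a small feed-forward network; the window size, architecture, and gesture labels are assumptions, and the untrained weights produce arbitrary output shown only to demonstrate the data flow:

```python
import torch
import torch.nn as nn

# Illustrative gesture classifier over a window of raw IMU samples.
WINDOW = 50      # samples per inference window (e.g., ~0.5 s at 100 Hz)
CHANNELS = 6     # 3-axis accelerometer + 3-axis gyroscope
GESTURES = ["tap", "double_tap", "swipe", "none"]

model = nn.Sequential(
    nn.Flatten(),                        # (batch, 50, 6) -> (batch, 300)
    nn.Linear(WINDOW * CHANNELS, 64),
    nn.ReLU(),
    nn.Linear(64, len(GESTURES)),        # one logit per gesture class
)

imu_window = torch.randn(1, WINDOW, CHANNELS)   # stand-in motion data
logits = model(imu_window)
print(GESTURES[logits.argmax(dim=1).item()])    # arbitrary until trained
```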

    TACTILE TEXTURES FOR BACK OF SCREEN GESTURE DETECTION USING MOTION SENSOR DATA AND MACHINE LEARNING

    A computing device is described that uses motion data from motion sensors to detect gestures or user inputs, such as out-of-screen user inputs for mobile devices. In other words, the computing device detects gestures or user touch inputs at locations of the device that do not include a touch screen, such as anywhere on the surface of the housing or the case of the device. A tactile texture is applied to a housing of the computing device or a case that is coupled to the housing. The tactile texture causes the computing device to move in response to a user input applied to the tactile texture, such as when a user’s finger slides over the tactile texture. A motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, etc.) generates motion data in response to detecting the motion of the computing device. The motion data is processed by an artificial neural network to infer attributes of the user input. In other words, the computing device applies a machine-learned model to the motion data (also referred to as sensor data or motion sensor data) to classify or label the various attributes, characteristics, or qualities of the input. In this way, the computing device utilizes machine learning and motion data to classify attributes of the user input or gesture without the need for additional hardware, such as touch-sensitive devices and sensors.
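
    As an illustration of why the texture helps: a finger sliding across regularly spaced ridges excites a vibration whose frequency is roughly slide speed divided by ridge spacing. The sketch below, with an assumed sample rate and synthetic data, shows one plausible pre-processing step: finding that dominant frequency in an accelerometer trace before handing features to the model:

```python
import numpy as np

# Sketch: locate the dominant ridge-crossing frequency in a 1-D
# accelerometer trace. Sample rate and signal values are illustrative.
FS = 400.0   # assumed accelerometer sample rate (Hz)

def dominant_frequency(accel: np.ndarray) -> float:
    """Return the strongest non-DC frequency in the trace."""
    accel = accel - accel.mean()             # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / FS)
    return freqs[spectrum.argmax()]

# Synthetic trace: 80 Hz texture-induced vibration plus noise.
t = np.arange(0, 0.5, 1.0 / FS)
trace = 0.2 * np.sin(2 * np.pi * 80 * t) + 0.05 * np.random.randn(len(t))
print(round(dominant_frequency(trace)))      # ~80 (Hz)
```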

    DETECTING ATTRIBUTES OF USER INPUTS UTILIZING MOTION SENSOR DATA AND MACHINE LEARNING

    A computing device is described that uses motion data from motion sensors to detect user inputs, such as out-of-screen user inputs for mobile devices. In other words, the computing device detects user touch inputs at locations of the device that do not include a touch screen, such as anywhere on the surface of the housing or case of the device. The techniques described enable a computing device to utilize a standard, existing motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, etc.) to detect the user input and determine attributes of the user input. Motion data generated by the motion sensor (also referred to as a movement sensor) is processed by an artificial neural network to infer characteristics or attributes of the user input, including a location on the housing where the input was detected; a surface of the housing where the input was detected (e.g., front, back, and edges, such as top, bottom, and sides); and a type of user input (e.g., finger, stylus, fingernail, finger pad, etc.). In other words, the computing device applies a machine-learned model to the sensor data to classify or label the various attributes, characteristics, or qualities of the input. In this way, the computing device utilizes machine learning and motion data to classify attributes of the user input or gesture without the need for additional hardware, such as touch-sensitive devices and sensors.
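
    A minimal sketch of inferring several attributes at once, using a shared encoder with one output head per attribute; the shapes, label sets, and architecture are assumptions, not the described system:

```python
import torch
import torch.nn as nn

# Illustrative multi-head classifier: one shared encoder over the motion
# window, with a separate head per inferred attribute. Weights untrained.
SURFACES = ["front", "back", "top", "bottom", "left", "right"]
INPUT_TYPES = ["finger_pad", "fingernail", "stylus"]

class AttributeNet(nn.Module):
    def __init__(self, window=50, channels=6, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(window * channels, hidden), nn.ReLU()
        )
        self.surface_head = nn.Linear(hidden, len(SURFACES))   # which face
        self.type_head = nn.Linear(hidden, len(INPUT_TYPES))   # what touched
        self.location_head = nn.Linear(hidden, 2)              # (x, y) on housing

    def forward(self, x):
        h = self.encoder(x)
        return self.surface_head(h), self.type_head(h), self.location_head(h)

net = AttributeNet()
surface, itype, loc = net(torch.randn(1, 50, 6))
print(SURFACES[surface.argmax(1).item()], INPUT_TYPES[itype.argmax(1).item()])
```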

    VISUAL AUDIO MESSAGES

    Computing devices (e.g., a cellular phone, a smartphone, a desktop computer, a laptop computer, a tablet computer, a portable gaming device, a watch, etc.) may enable users to exchange electronic communications including both a recorded message, such as an audio recording, a video recording, etc., as well as a transcript of the recorded message. In some examples, a first computing device may record audio from a first user and perform speech-to-text transcription to generate a transcript of the recorded audio. The first computing device may then send the recorded message and the transcript of the recorded message in a single electronic communication to a second computing device (e.g., being used by a second user). Because the electronic communication includes both the recorded message and its transcript, the second user can both listen to and read the recorded message, which may improve consumption of the recorded message (e.g., background noise may make listening difficult, or reading a transcript may be faster than listening). To facilitate a hands-free user experience, the computing device may include a voice user interface (VUI) by which a user may compose the electronic communication. For example, the user may provide voice commands (e.g., “clear”, “send”, “browse”, etc.) to cause the computing device to perform corresponding functions with respect to the electronic communication. Furthermore, the computing device may provide one or more instructions for using voice commands. In some cases, the instructions may relate to the action currently being taken by the user, a context of the electronic communication, etc.
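
    A minimal sketch of the voice-command handling described above; the draft structure and handler functions are hypothetical, and only the command names come from the disclosure:

```python
# Sketch of a voice-command dispatcher for composing a combined
# audio + transcript message.

draft = {"audio": b"", "transcript": ""}

def cmd_clear(draft):
    draft["audio"], draft["transcript"] = b"", ""
    return "Draft cleared."

def cmd_send(draft):
    # A real device would bundle the audio and its transcript into a
    # single electronic communication here.
    return f"Sent message with transcript: {draft['transcript']!r}"

COMMANDS = {"clear": cmd_clear, "send": cmd_send}

def handle_utterance(text, draft):
    """Route a recognized utterance: known commands act on the draft;
    anything else is treated as dictation appended to the transcript."""
    action = COMMANDS.get(text.strip().lower())
    if action:
        return action(draft)
    draft["transcript"] += text
    return None

handle_utterance("Running late, be there soon", draft)
print(handle_utterance("send", draft))
```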

    Beyond Fitts’ law: Models for trajectory-based HCI tasks.

    Trajectory-based interactions, such as navigating through nested menus, drawing curves, and moving in 3D worlds, are becoming common tasks in modern computer interfaces. Users' performance in these tasks cannot be successfully modeled with Fitts' law as it has been applied to pointing tasks. We therefore explore the possible existence of robust regularities in trajectory-based tasks. We used "steering through tunnels" as our experimental paradigm to represent such tasks, and found that a simple "steering law" indeed exists. The paper presents the motivation, analysis, a series of four experiments, and the applications of the steering law.
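
    For reference, the steering law established in this paper has the following form, where a and b are empirically determined constants analogous to those of Fitts' law:

```latex
% Steering law: time T to steer through a tunnel C whose width at
% arc length s is W(s).
T = a + b \int_{C} \frac{\mathrm{d}s}{W(s)}

% Special case: a straight tunnel of length A and constant width W.
T = a + b\,\frac{A}{W}
```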

    New Mexico Lobo, Volume 046, No 32, 3/10/1944

    https://digitalrepository.unm.edu/daily_lobo_1944/1006/